Cadenza Lyric Intelligibility Prediction Challenge (CLIP)
Dear colleague,

It gives us great pleasure to pre-announce the next Cadenza Challenge on music processing and hearing loss. This autumn we will be running the Cadenza Lyric Intelligibility Prediction Challenge (CLIP). We're hoping this will be accepted as an ICASSP 2026 Grand Challenge.
The Challenge
To develop better music processing through machine learning, we need a reliable way to evaluate the audio automatically. For music with lyrics, we need a metric for the intelligibility of the sung words. The metric would come from a predictive model that takes audio as input and estimates the lyric intelligibility score that someone would achieve in a listening test.
With the development of large language models and foundation models for speech and music, there is great potential to significantly improve on the current state of the art. The music will be from genres such as pop and rock. Some of it will be presented as-is; the rest will be passed through a hearing loss simulator to mimic listeners who have a hearing loss but are not wearing hearing aids.
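For illustration, the sketch below shows the kind of interface such a prediction system might expose: an audio signal in, a lyric intelligibility score out. This is a minimal sketch under our own assumptions; the function names, the toy feature extractor, and the placeholder scoring model are illustrative only and are not the challenge API or the baseline system.

```python
# Hypothetical sketch of the kind of system the challenge asks for: a model
# that maps an audio signal (music with vocals) to a predicted lyric
# intelligibility score, i.e. the proportion of words a listener would report
# correctly in a listening test. Names and features are illustrative only.
import numpy as np


def extract_features(audio: np.ndarray, sample_rate: int) -> np.ndarray:
    """Toy feature extractor: log-energy in short frames (placeholder)."""
    frame_len = int(0.025 * sample_rate)   # 25 ms frames
    hop = int(0.010 * sample_rate)         # 10 ms hop
    frames = [
        audio[start:start + frame_len]
        for start in range(0, len(audio) - frame_len, hop)
    ]
    return np.array([np.log(np.sum(f ** 2) + 1e-10) for f in frames])


def predict_intelligibility(audio: np.ndarray, sample_rate: int) -> float:
    """Map audio to a lyric intelligibility score in [0, 1] (placeholder model)."""
    features = extract_features(audio, sample_rate)
    # A real entry would use a trained model (e.g. a speech/music foundation
    # model with a regression head); here we just squash a summary statistic.
    score = 1.0 / (1.0 + np.exp(-np.mean(features) / 10.0))
    return float(np.clip(score, 0.0, 1.0))


if __name__ == "__main__":
    sr = 16000
    dummy_audio = np.random.randn(sr * 3)  # 3 s of noise as a stand-in signal
    print(f"Predicted intelligibility: {predict_intelligibility(dummy_audio, sr):.2f}")
```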
What will be provided
- Training, evaluation and test sets of music.
- Ground truth lyric intelligibility from listening tests.
- Software tools including a baseline system.
Important Dates
All dates are Anywhere on Earth (AoE) time and are provisional.
- 1st September 2025: Launch of challenge, release of data.
- 1st November 2025: Release of evaluation data and opening of submission window.
- 1st December 2025: Submission deadline. All entrants must have submitted their predictions plus a draft of their technical report.
- If this is accepted as an ICASSP Grand Challenge:
  - 7th December 2025: Invited papers for the ICASSP session.
  - 2-8 May 2026: Session at ICASSP 2026.
We will know whether this has been accepted as an ICASSP Grand Challenge in July 2025.
Stay informed
To stay informed, please sign up to our Google Group.
Organisers
- Michael A. Akeroyd, University of Nottingham
- Scott Bannister, University of Leeds
- Jon P Barker, University of Sheffield
- Trevor J. Cox, University of Salford
- Bruno Fazenda, University of Salford
- Jennifer Firth, University of Nottingham
- Simone Graetzer, University of Salford
- Alinka Greasley, University of Leeds
- Gerardo Roa-Dabike, University of Sheffield
- Rebecca R. Vos, University of Salford
- William M. Whitmer, University of Nottingham
Funded by
Engineering and Physical Sciences Research Council (EPSRC), UK
Partners
RNID, Google, Logitech, Sonova, BBC R&D, Oldenburg University.